
    AudExpCreator: A GUI-based Matlab tool for designing and creating auditory experiments with the Psychophysics Toolbox

    We present AudExpCreator, a GUI-based Matlab tool for designing and creating auditory experiments. AudExpCreator allows users to generate auditory experiments that run on Matlab's Psychophysics Toolbox without having to write any code; rather, users simply follow instructions in GUIs to specify desired design parameters. The software comprises five auditory study types, including behavioral studies and integration with EEG and physiological response collection systems. Advanced features permit more complicated experimental designs as well as maintenance and updating of previously created experiments. AudExpCreator alleviates programming barriers while providing a free, open-source alternative to commercial experimental design software. (15 pages, 6 figures)

    Inter-subject Correlation While Listening to Minimalist Music: A Study of Electrophysiological and Behavioral Responses to Steve Reich’s Piano Phase

    Musical minimalism utilizes the temporal manipulation of restricted collections of rhythmic, melodic, and/or harmonic materials. One example, Steve Reich’s Piano Phase, offers listeners readily audible formal structure with unpredictable events at the local level. For example, pattern recurrences may generate strong expectations which are violated by small temporal and pitch deviations. A hyper-detailed listening strategy prompted by these minute deviations stands in contrast to the type of listening engagement typically cultivated around functional tonal Western music. Recent research has suggested that the inter-subject correlation (ISC) of electroencephalographic (EEG) responses to natural audio-visual stimuli objectively indexes a state of “engagement,” demonstrating the potential of this approach for analyzing music listening. But can ISCs capture engagement with minimalist music, which features less obvious expectation formation and has historically received a wide range of reactions? To approach this question, we collected EEG and continuous behavioral (CB) data while 30 adults listened to an excerpt from Steve Reich’s Piano Phase, as well as three controlled manipulations and a popular-music remix of the work. Our analyses reveal that EEG and CB ISC are highest for the remix stimulus and lowest for our most repetitive manipulation, no statistical differences in overall EEG ISC between our most musically meaningful manipulations and Reich’s original piece, and evidence that compositional features drove engagement in time-resolved ISC analyses. We also found that aesthetic evaluations corresponded well with overall EEG ISC. Finally, we highlight co-occurrences between stimulus events and time-resolved EEG and CB ISC. We offer the CB paradigm as a useful analysis measure and note the value of minimalist compositions as a limit case for the neuroscientific study of music listening. Overall, our participants’ neural, continuous behavioral, and question responses showed strong similarities that may help refine our understanding of the type of engagement indexed by ISC for musical stimuli.
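    The ISC measure at the heart of this abstract can be illustrated with a minimal sketch. The study likely used correlated components analysis across multichannel EEG; the version below is a simplified stand-in that computes the mean pairwise Pearson correlation of one time course per subject, and all names here are hypothetical.

    ```python
    import numpy as np

    def pairwise_isc(data):
        """Mean pairwise Pearson correlation across subjects.

        data: array of shape (n_subjects, n_samples), one response
        time course (e.g. an EEG component) per subject.
        """
        n = data.shape[0]
        # z-score each subject's time course
        z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
        corrs = []
        for i in range(n):
            for j in range(i + 1, n):
                # Pearson r of z-scored signals is the mean of their product
                corrs.append(np.mean(z[i] * z[j]))
        return float(np.mean(corrs))
    ```

    Under this definition, identical responses across subjects yield an ISC of 1, while independent noise yields values near 0, which is the intuition behind treating higher ISC as an index of shared engagement with the stimulus.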

    Asymmetrical Reinforcement and Wolbachia Infection in Drosophila

    Reinforcement refers to the evolution of increased mating discrimination against heterospecific individuals in zones of geographic overlap and can be considered a final stage in the speciation process. One of the factors that may affect reinforcement is the degree to which hybrid matings result in the permanent loss of genes from a species' gene pool. Matings between females of Drosophila subquinaria and males of D. recens result in high levels of offspring mortality, due to interspecific cytoplasmic incompatibility caused by Wolbachia infection of D. recens. Such hybrid inviability is not manifested in matings between D. recens females and D. subquinaria males. Here we ask whether the asymmetrical hybrid inviability is associated with a corresponding asymmetry in the level of reinforcement. The geographic ranges of D. recens and D. subquinaria were found to overlap across a broad belt of boreal forest in central Canada. Females of D. subquinaria from the zone of sympatry exhibit much stronger levels of discrimination against males of D. recens than do females from allopatric populations. In contrast, such reproductive character displacement is not evident in D. recens, consistent with the expected effects of unidirectional cytoplasmic incompatibility. Furthermore, there is substantial behavioral isolation within D. subquinaria, because females from populations sympatric with D. recens discriminate against allopatric conspecific males, whereas females from populations allopatric with D. recens show no discrimination against any conspecific males. Patterns of general genetic differentiation among populations are not consistent with patterns of behavioral discrimination, which suggests that the behavioral isolation within D. subquinaria results from selection against mating with Wolbachia-infected D. recens.
    Interspecific cytoplasmic incompatibility may contribute not only to post-mating isolation, an effect already widely recognized, but also to reinforcement, particularly in the uninfected species. The resulting reproductive character displacement not only increases behavioral isolation from the Wolbachia-infected species, but may also lead to behavioral isolation between populations of the uninfected species. Given the widespread occurrence of Wolbachia among insects, it thus appears that there are multiple ways by which these endosymbionts may directly and indirectly contribute to reproductive isolation and speciation.

    Factors influencing classification of frequency following responses to speech and music stimuli.

    Successful mapping of meaningful labels to sound input requires accurate representation of that sound’s acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach that can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEPs) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a “leave-one-out” approach, in which subject data is classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that leave-one-subject-out cross-validation achieves accuracy comparable to that of the more conventional cross-validation approach, while allowing a subject’s results to be analysed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
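    The leave-one-subject-out scheme described above can be sketched as follows. This is not the authors' pipeline: it substitutes a toy nearest-centroid classifier and synthetic feature vectors for the real FFR features, and every name and parameter here is illustrative only. The key property it demonstrates is that each subject is classified by a model trained entirely on other subjects' data.

    ```python
    import numpy as np

    def loso_accuracy(X, y, groups):
        """Leave-one-subject-out accuracy with a nearest-centroid
        classifier (a simplified stand-in for the study's classifier).

        X:      (n_trials, n_features) feature vectors (e.g. FFR spectra)
        y:      (n_trials,) stimulus labels
        groups: (n_trials,) subject id for each trial
        """
        accs = []
        for g in np.unique(groups):
            train, test = groups != g, groups == g
            # class centroids computed only from the other subjects' data
            classes = np.unique(y)
            centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in classes])
            # assign each held-out trial to its nearest class centroid
            d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
            preds = classes[np.argmin(d, axis=1)]
            accs.append(np.mean(preds == y[test]))
        return float(np.mean(accs))

    # Synthetic demo: 13 "subjects", two stimulus classes with separated means.
    rng = np.random.default_rng(1)
    X, y, groups = [], [], []
    for s in range(13):
        for label in (0, 1):  # e.g. a speech vs. a music stimulus
            X.append(rng.standard_normal((20, 40)) + label * 1.5)
            y += [label] * 20
            groups += [s] * 20
    X, y, groups = np.vstack(X), np.array(y), np.array(groups)
    acc = loso_accuracy(X, y, groups)
    ```

    Because the held-out subject contributes nothing to training, a high score here is evidence the decision rule generalizes across individuals, which is what makes the approach compatible with normative data pooled from a separate population.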
